On adaptive smoothing in kernel discriminant analysis
Authors
Abstract
One popular application of kernel density estimation is in kernel discriminant analysis, where kernel estimates of population densities are plugged into the Bayes rule to develop a nonparametric classifier. The performance of these kernel density estimates, and that of the corresponding classifier, depends on the values of the associated smoothing parameters, commonly known as bandwidths. Bandwidths that minimize the mean integrated square errors of kernel density estimates often lead to poor misclassification rates in classification problems. In discriminant analysis, a cross-validated estimate of the misclassification probability is usually minimized to find the optimal bandwidth, and that bandwidth is then used for classifying all observations. However, in addition to depending on the training data set, a good choice of bandwidth should also depend on the specific observation to be classified. Therefore, instead of fixing the value of the bandwidth parameter, in practice it may be more useful to choose it adaptively. This article presents one such adaptive classification technique, where the bandwidth is chosen based on the training sample and also on the data point to be classified. The performance of the proposed method is illustrated on several benchmark data sets.
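The plug-in Bayes rule with a point-adaptive bandwidth can be sketched as follows. This is an illustrative one-dimensional implementation, not the authors' specific proposal: the k-nearest-neighbor "balloon" bandwidth used here is just one common way to let the amount of smoothing depend on the query point.

```python
import numpy as np

def knn_bandwidth(x, sample, k=5):
    """Point-adaptive bandwidth: distance from x to its k-th
    nearest training point (an illustrative choice, not the
    method proposed in the article)."""
    d = np.sort(np.abs(sample - x))
    return d[min(k, len(d)) - 1]

def gaussian_kde(x, sample, h):
    """Gaussian kernel density estimate at x from a 1-D sample."""
    u = (x - sample) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2 * np.pi))

def adaptive_kda(x, samples, priors, k=5):
    """Bayes rule with plug-in kernel density estimates whose
    bandwidths adapt to the query point x.
    samples: dict label -> 1-D training array
    priors:  dict label -> prior probability"""
    scores = {}
    for label, s in samples.items():
        h = knn_bandwidth(x, s, k)
        scores[label] = priors[label] * gaussian_kde(x, s, h)
    return max(scores, key=scores.get)
```

Because the bandwidth is recomputed for every query point and every class, the classifier smooths more in sparse regions and less in dense ones, which is the intuition behind choosing the bandwidth adaptively rather than fixing a single value by cross-validation.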
Similar resources
Multi-scale Kernel Discriminant Analysis
The bandwidth that minimizes the mean integrated square error of a kernel density estimator may not always be good when the density estimate is used for classification purposes. On the other hand, cross-validation-based techniques for choosing bandwidths may not be computationally feasible when there are many competing classes. Instead of concentrating on a single optimum bandwidth for each popul...
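The multi-scale idea sketched in this abstract can be illustrated by aggregating plug-in Bayes classifiers over a grid of bandwidths instead of committing to a single one. The majority vote below is a minimal hedged sketch of that idea; the actual paper may weight or combine scales differently.

```python
import numpy as np

def gaussian_kde(x, sample, h):
    """Gaussian kernel density estimate at x from a 1-D sample."""
    u = (x - sample) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2 * np.pi))

def multiscale_classify(x, samples, bandwidths):
    """Classify x by majority vote of plug-in Bayes classifiers,
    one per bandwidth in the grid (equal priors assumed)."""
    votes = []
    for h in bandwidths:
        dens = {k: gaussian_kde(x, s, h) for k, s in samples.items()}
        votes.append(max(dens, key=dens.get))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```

Sweeping a grid of bandwidths avoids both the instability of a single MISE-optimal bandwidth and the cost of a full cross-validation search over every class.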
Bayesian multiscale smoothing in supervised and semi-supervised kernel discriminant analysis
In kernel discriminant analysis, it is common practice to select the smoothing parameter (bandwidth) based on the training data and use it for classifying all unlabeled observations. But this method of selecting a single scale of smoothing ignores the major issue of model uncertainty. Moreover, in addition to depending on the training sample, a good choice of bandwidth may also depend on the ob...
Adaptive Quasiconformal Kernel Fisher Discriminant Analysis via Weighted Maximum Margin Criterion
Kernel Fisher discriminant analysis (KFD) is an effective method to extract nonlinear discriminant features of input data using the kernel trick. However, conventional KFD algorithms suffer from the kernel selection problem as well as the singularity problem. In order to overcome these limitations, a novel nonlinear feature extraction method called adaptive quasiconformal kernel Fisher discriminant ana...
Fisher’s Linear Discriminant Analysis for Weather Data by reproducing kernel Hilbert spaces framework
Recently, with the development of science and technology, data of a functional nature are easy to collect. Hence, statistical analysis of such data is of great importance. Similar to multivariate analysis, linear combinations of random variables have a key role in functional analysis. The theory of Reproducing Kernel Hilbert Spaces is very important in this context. In this paper we study a gen...
Adaptive quasiconformal kernel discriminant analysis
Kernel discriminant analysis (KDA) is effective for extracting nonlinear discriminative features of input samples using the kernel trick. However, the conventional KDA algorithm suffers from the kernel selection problem, which has a significant impact on the performance of KDA. In order to overcome this limitation, a novel nonlinear feature extraction method called adaptive quasiconformal kernel discriminant anal...